XR Input Error Mediation for Hand-Based Input: Task and Context Influences a User's Preference
Many XR devices use bare-hand gestures to reduce the need for handheld
controllers. Such gestures, however, lead to false positive and false negative
recognition errors, which detract from the user experience. While mediation
techniques enable users to overcome recognition errors by clarifying their
intentions via UI elements, little research has explored how mediation
techniques should be designed in XR and how a user's task and context may
impact their design preferences. This research presents empirical studies about
the impact of user-perceived error costs on users' preferences for three
mediation technique designs, under different simulated scenarios that were
inspired by real-life tasks. Based on a large-scale crowd-sourced survey and an
immersive VR-based user study, our results suggest that the varying contexts
within each task type can impact users' perceived error costs, leading to
different preferred mediation techniques. We further discuss the
implications of these results for future XR interaction design.
Comment: IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 202
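The abstract above describes choosing among mediation designs based on a user's perceived cost of a recognition error. A minimal sketch of that idea, with hypothetical names and thresholds not taken from the paper: the auto-accept bar rises with the task's error cost, so the same gesture is executed outright in a low-stakes context but routed through a clarification UI in a high-stakes one.

```python
# Illustrative sketch (hypothetical API, not the paper's implementation):
# picking a mediation strategy from recognizer confidence and a
# task-dependent cost of acting on a false positive.

from dataclasses import dataclass

@dataclass
class GestureEvent:
    label: str         # recognized gesture, e.g. "pinch_select"
    confidence: float  # recognizer score in [0, 1]

def choose_mediation(event: GestureEvent, error_cost: float,
                     accept_thresh: float = 0.9,
                     reject_thresh: float = 0.4) -> str:
    """Return one of three mediation designs: execute immediately,
    ask the user to clarify via a UI element, or discard the input."""
    # Raise the auto-accept bar in proportion to the cost of being wrong.
    effective_accept = min(0.99, accept_thresh + 0.05 * error_cost)
    if event.confidence >= effective_accept:
        return "execute"   # no mediation needed
    if event.confidence >= reject_thresh:
        return "confirm"   # show a clarification UI element
    return "ignore"        # likely a false positive

# The same 0.92-confidence gesture is executed for a low-cost task
# but mediated for a high-cost one.
print(choose_mediation(GestureEvent("pinch_select", 0.92), error_cost=0.0))  # execute
print(choose_mediation(GestureEvent("pinch_select", 0.92), error_cost=2.0))  # confirm
```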
VIRD: Immersive Match Video Analysis for High-Performance Badminton Coaching
Badminton is a fast-paced sport that requires a strategic combination of
spatial, temporal, and technical tactics. To gain a competitive edge at
high-level competitions, badminton professionals frequently analyze match
videos to gain insights and develop game strategies. However, the current
process for analyzing matches is time-consuming and relies heavily on manual
note-taking, due to the lack of automatic data collection and appropriate
visualization tools. As a result, there is a gap in effectively analyzing
matches and communicating insights among badminton coaches and players. This
work proposes an end-to-end immersive match analysis pipeline designed in close
collaboration with badminton professionals, including Olympic and national
coaches and players. We present VIRD, a VR Bird (i.e., shuttle) immersive
analysis tool that supports interactive badminton game analysis in an
immersive environment based on 3D reconstructed game views of the match video.
We propose a top-down analytic workflow that allows users to seamlessly move
from a high-level match overview to a detailed game view of individual rallies
and shots, using situated 3D visualizations and video. We collect 3D spatial
and dynamic shot data and player poses with computer vision models and
visualize them in VR. Through immersive visualizations, coaches can
interactively analyze situated spatial data (player positions, poses, and shot
trajectories) with flexible viewpoints while navigating between shots and
rallies effectively with embodied interaction. We evaluated the usefulness of
VIRD with Olympic and national-level coaches and players in real matches.
Results show that immersive analytics supports effective badminton match
analysis with reduced context-switching costs and enhances spatial
understanding with a high sense of presence.
Comment: To appear in IEEE Transactions on Visualization and Computer Graphics (IEEE VIS), 202
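The top-down workflow the VIRD abstract describes (match overview, then individual rallies, then shots) can be sketched as a simple drill-down data model. All names here are illustrative assumptions, not VIRD's actual API:

```python
# Illustrative sketch of a match -> rally -> shot drill-down hierarchy.
# Field names and structure are assumptions for exposition only.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Shot:
    player: str
    shot_type: str                     # e.g. "smash", "clear"
    trajectory: List[Tuple[float, float, float]]  # 3D shuttle positions

@dataclass
class Rally:
    winner: str
    shots: List[Shot] = field(default_factory=list)

@dataclass
class Match:
    rallies: List[Rally] = field(default_factory=list)

    def overview(self) -> dict:
        """High-level entry point: aggregate stats before drilling down."""
        return {"rallies": len(self.rallies),
                "shots": sum(len(r.shots) for r in self.rallies)}

match = Match([Rally("P1", [Shot("P1", "smash", [(0.0, 0.0, 3.0), (4.0, 1.0, 0.0)])])])
print(match.overview())                     # {'rallies': 1, 'shots': 1}
print(match.rallies[0].shots[0].shot_type)  # smash
```

The point of the hierarchy is that each level carries only what its view needs: the overview aggregates counts, while situated 3D visualizations read the per-shot trajectories.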
RL-LABEL: A Deep Reinforcement Learning Approach Intended for AR Label Placement in Dynamic Scenarios
Labels are widely used in augmented reality (AR) to display digital
information. Ensuring the readability of AR labels requires placing them
occlusion-free while keeping visual linkings legible, especially when multiple
labels exist in the scene. Although existing optimization-based methods, such
as force-based methods, are effective in managing AR labels in static
scenarios, they often struggle in dynamic scenarios with constantly moving
objects. This is due to their focus on generating layouts optimal for the
current moment, neglecting future moments and leading to sub-optimal or
unstable layouts over time. In this work, we present RL-LABEL, a deep
reinforcement learning-based method for managing the placement of AR labels in
scenarios involving moving objects. RL-LABEL considers the current and
predicted future states of objects and labels, such as positions and
velocities, as well as the user's viewpoint, to make informed decisions about
label placement. It balances the trade-offs between immediate and long-term
objectives. Our experiments on two real-world datasets show that RL-LABEL
effectively learns the decision-making process for long-term optimization,
outperforming two baselines (i.e., no view management and a force-based method)
by minimizing label occlusions, line intersections, and label movement
distance. Additionally, a user study involving 18 participants indicates that
RL-LABEL excels over the baselines in aiding users to identify, compare, and
summarize data on AR labels within dynamic scenes.
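The three objectives the RL-LABEL abstract names (label occlusion, line intersections, and label movement distance) suggest a per-step reward of the form sketched below. The weights and helper names are assumptions for illustration, not the paper's implementation:

```python
# Illustrative sketch: a per-step reward combining the three objectives
# named in the abstract. Weights are arbitrary assumptions.

def step_reward(occluded_area: float,
                num_line_intersections: int,
                move_distance: float,
                w_occ: float = 1.0,
                w_int: float = 0.5,
                w_move: float = 0.1) -> float:
    """Negative weighted cost: the agent maximizes reward by keeping
    labels visible, leader lines untangled, and layouts stable over time."""
    return -(w_occ * occluded_area
             + w_int * num_line_intersections
             + w_move * move_distance)

# A stable, occlusion-free placement scores higher than one that jumps
# around to stay instantaneously optimal -- the long-term trade-off an
# RL agent can learn but a purely momentary optimizer misses.
r_stable = step_reward(0.0, 0, move_distance=0.2)
r_jumpy = step_reward(0.0, 0, move_distance=3.0)
print(r_stable > r_jumpy)  # True
```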
iBall: Augmenting Basketball Videos with Gaze-moderated Embedded Visualizations
We present iBall, a basketball video-watching system that leverages
gaze-moderated embedded visualizations to facilitate game understanding and
engagement of casual fans. Video broadcasting and online video platforms make
watching basketball games increasingly accessible. Yet, for new or casual fans,
watching basketball videos is often confusing due to their limited basketball
knowledge and the lack of accessible, on-demand information to resolve their
confusion. To assist casual fans in watching basketball videos, we compared the
game-watching behaviors of casual and die-hard fans in a formative study and
developed iBall based on the findings. iBall embeds visualizations into
basketball videos using a computer vision pipeline, and automatically adapts
the visualizations based on the game context and users' gaze, helping casual
fans appreciate basketball games without being overwhelmed. We confrmed the
usefulness, usability, and engagement of iBall in a study with 16 casual fans,
and further collected feedback from 8 die-hard fans.
Comment: ACM CHI2
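Gaze-moderated adaptation of the kind iBall describes can be sketched as a small dwell-time state machine. The function name, levels, and 500 ms threshold below are hypothetical, chosen only to illustrate the idea of revealing detail progressively as gaze signals intent:

```python
# Minimal sketch (hypothetical names and threshold): an embedded
# visualization expands only after the viewer's gaze dwells on its
# anchor, so casual fans are not overwhelmed by always-on overlays.

def visualization_level(gaze_on_target: bool, dwell_ms: float,
                        expand_after_ms: float = 500.0) -> str:
    """Return the detail level for an embedded visualization."""
    if not gaze_on_target:
        return "hidden"   # keep the video clean
    if dwell_ms < expand_after_ms:
        return "glyph"    # lightweight cue near the player
    return "detail"       # full stats card once intent is clear

print(visualization_level(True, 120.0))   # glyph
print(visualization_level(True, 800.0))   # detail
print(visualization_level(False, 800.0))  # hidden
```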
Grand Challenges in Immersive Analytics
The definitive version will be published in CHI 2021, May 8-13, 2021, Yokohama, Japan. Immersive Analytics is a quickly evolving field that unites several areas such as visualisation, immersive environments, and human-computer interaction to support human data analysis with emerging technologies. This research has thrived over the past years with multiple workshops, seminars, and a growing body of publications, spanning several conferences. Given the rapid advancement of interaction technologies and novel application domains, this paper aims toward a broader research agenda to enable widespread adoption. We present 17 key research challenges developed over multiple sessions by a diverse group of 24 international experts, initiated from a virtual scientific workshop at ACM CHI 2020. These challenges aim to coordinate future work by providing a systematic roadmap of current directions and impending hurdles to facilitate productive and effective applications for Immersive Analytics.